Tootfinder

Opt-in global Mastodon full-text search. Join the index!

@arXiv_csNE_bot@mastoxiv.page
2024-02-20 06:51:26

Sleep-Like Unsupervised Replay Improves Performance when Data are Limited or Unbalanced
Anthony Bazhenov, Pahan Dewasurendra, Giri Krishnan, Jean Erik Delanois
arxiv.org/abs/2402.10956

@arXiv_csNE_bot@mastoxiv.page
2024-02-23 06:51:18

Brain-inspired Distributed Memorization Learning for Efficient Feature-free Unsupervised Domain Adaptation
Jianming Lv, Depin Liang, Zequan Liang, Yaobin Zhang, Sijun Xia
arxiv.org/abs/2402.14598 arxiv.org/pdf/2402.14598
arXiv:2402.14598v1 Announce Type: new
Abstract: Compared with gradient-based artificial neural networks, biological neural networks usually show more powerful generalization, quickly adapting to unknown environments without any gradient back-propagation procedure. Inspired by the distributed memory mechanism of the human brain, we propose a novel gradient-free Distributed Memorization Learning mechanism, DML, to support quick domain adaptation of transferred models. In particular, DML adopts randomly connected neurons to memorize the associations of input signals, which are propagated as impulses, and makes the final decision by associating the distributed memories based on their confidence. More importantly, DML can perform reinforced memorization on unlabeled data to quickly adapt to a new domain without heavy fine-tuning of deep features, which makes it well suited to deployment on edge devices. Experiments on four cross-domain real-world datasets show that DML achieves superior real-time domain adaptation compared with a traditional gradient-based MLP, improving accuracy by more than 10% while cutting optimization time by 87%.
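
As an illustration only (the paper's actual implementation is not shown in this post), a minimal gradient-free sketch in the spirit of the abstract might pair fixed random projections (the "randomly connected neurons") with per-class Hebbian memory traces and confidence-weighted voting across several memory banks. Everything below, including the names DMLSketch and MemorizationBank and all parameters, is a hypothetical reading of the abstract, not the authors' DML.

# Hypothetical sketch inspired by the DML abstract (arXiv:2402.14598).
# Not the paper's code: fixed random projections stand in for "randomly
# connected neurons", and each bank keeps per-class memory traces that are
# updated without any gradient back-propagation.
import numpy as np

def _softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

class MemorizationBank:
    def __init__(self, in_dim, proj_dim, n_classes, rng):
        self.W = rng.standard_normal((in_dim, proj_dim))  # fixed random "wiring"
        self.mem = np.zeros((n_classes, proj_dim))        # per-class memory traces
        self.count = np.zeros(n_classes)

    def encode(self, x):
        return np.tanh(x @ self.W)                        # impulse-like response

    def memorize(self, x, y):
        self.mem[y] += self.encode(x)                     # Hebbian-style update, no backprop
        self.count[y] += 1

    def scores(self, x):
        protos = self.mem / np.maximum(self.count[:, None], 1)
        return protos @ self.encode(x)                    # similarity to each class memory

class DMLSketch:
    def __init__(self, in_dim, n_classes, n_banks=8, proj_dim=64, seed=0):
        rng = np.random.default_rng(seed)
        self.n_classes = n_classes
        self.banks = [MemorizationBank(in_dim, proj_dim, n_classes, rng)
                      for _ in range(n_banks)]

    def fit(self, X, y):
        for bank in self.banks:
            for xi, yi in zip(X, y):
                bank.memorize(xi, yi)

    def _confidence(self, x):
        # Associate the distributed memories, weighting each bank by confidence.
        votes = np.zeros(self.n_classes)
        for bank in self.banks:
            votes += _softmax(bank.scores(x))
        return votes / len(self.banks)

    def predict(self, X):
        return np.array([int(self._confidence(xi).argmax()) for xi in X])

    def adapt(self, X_unlabeled, threshold=0.8):
        # "Reinforced memorization" on unlabeled target data: re-memorize samples
        # whose ensemble confidence clears a threshold (self-training, no gradients).
        for xi in X_unlabeled:
            conf = self._confidence(xi)
            if conf.max() >= threshold:
                for bank in self.banks:
                    bank.memorize(xi, int(conf.argmax()))

Because both fitting and adaptation are single-pass memory updates rather than iterative gradient descent, a design like this would be cheap enough for the edge-device setting the abstract emphasizes.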
